2 | Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality ...
BASE
3 | ANLIzing the Adversarial Natural Language Inference Dataset
In: Proceedings of the Society for Computation in Linguistics (2022)
4 | Investigating Failures of Automatic Translation in the Case of Unambiguous Gender ...
7 | Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network ...
8 | On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
In: Transactions of the Association for Computational Linguistics, 9 (2021)
10 | Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little ...
12 | SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ...
19 | Measuring the Similarity of Grammatical Gender Systems by Comparing Partitions
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
20 | Pareto Probing: Trading Off Accuracy for Complexity
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
Abstract:
The question of how to probe contextual word representations in a way that is principled and useful has seen significant recent attention. In our contribution to this discussion, we argue, first, for a probe metric that reflects the trade-off between probe complexity and performance: the Pareto hypervolume. To measure complexity, we present a number of parametric and non-parametric metrics. Our experiments with such metrics show that probes' performance curves often fail to align with widely accepted rankings between language representations (with, e.g., non-contextual representations outperforming contextual ones). These results lead us to argue, second, that common simplistic probe tasks, such as POS labeling and dependency arc labeling, are inadequate for evaluating the properties encoded in contextual word representations. We propose full dependency parsing as an example probe task, and demonstrate it with the Pareto hypervolume. In support of our arguments, the results of this illustrative experiment conform more closely to accepted rankings among contextual word representations.
URL: https://hdl.handle.net/20.500.11850/462313 https://doi.org/10.3929/ethz-b-000462313